Fix tokenizer mismatch between benchmark client and sglang server #937
cquil11 merged 3 commits into SemiAnalysisAI:main
Conversation
Transformers v5 incorrectly rebuilds the pre_tokenizer/decoder components for models like DeepSeek-R1 that use LlamaTokenizerFast with a non-Llama tokenizer architecture. The sglang server fixes this at startup, but the benchmark client loads the tokenizer without these fixes, causing a ~5x token count inflation (e.g. 7,000 tokens -> 35,000 tokens) and false performance regressions in TTFT and throughput benchmarks. This PR applies the same tokenizer fixes (pre_tokenizer/decoder restoration and add_bos_token recovery) that the sglang server applies, so client and server tokenize identically. No-op on transformers v4. Made-with: Cursor
thanks @ZhaiFeiyue. Is this for all models in sglang or just for glm5?
@ZhaiFeiyue Thank you for identifying this error and for the fix. +1 to @functionstackx's comment. It will be helpful to identify which configs from amd-master.yaml and nvidia-master.yaml are affected by this so that we can re-run and get accurate perf across the board. cc @kedarpotdar-nv
hi @functionstackx, we only found this issue for DeepSeek-R1; we have not run any tests for glm5.
Hi @cquil11, no configs are currently affected by the patch. Only after we upgrade sglang to the latest main will we observe the issue, so this is proactive protection.
@claude Review
I'll analyze this and get back to you.
cquil11 left a comment
OK. I see that the same change has indeed been applied in SGLang upstream, https://github.com/sgl-project/sglang/blob/9238bd08a2895fa3b7ec79ea567e5c27ac951343/python/sglang/srt/utils/hf_transformers_utils.py#L836
LGTM
from tokenizers import Tokenizer as RawTokenizer
tok_file = Path(model_path) / "tokenizer.json"
🔴 The tokenizer fix is silently skipped when a HuggingFace Hub model ID (e.g. 'deepseek-ai/DeepSeek-R1') is passed instead of a local directory path. `Path('deepseek-ai/DeepSeek-R1') / 'tokenizer.json'` constructs a relative path that doesn't exist on disk, so both `tok_file.is_file()` and `config_file.is_file()` return `False` and the entire fix is bypassed. Since `get_model()` returns the model ID unchanged when ModelScope is not in use, this is the common case for most benchmark users: the tokenizer mismatch this PR is intended to fix will not be corrected for them.
Extended reasoning...
What the bug is and how it manifests
The _fix_tokenizer_for_sglang function reads tokenizer configuration files from the local filesystem using Path(model_path) / "tokenizer.json" and Path(model_path) / "tokenizer_config.json". Both reads are guarded by .is_file() checks. If model_path is a HuggingFace Hub model ID like 'deepseek-ai/DeepSeek-R1', neither path exists on disk, so both checks return False and both fixes — the pre_tokenizer/decoder restoration and the add_bos/add_eos recovery — are silently skipped. The function returns the tokenizer unchanged.
The specific code path that triggers it
In get_tokenizer() (line ~517), when pretrained_model_name_or_path is not a local directory, get_model() is called. Without VLLM_USE_MODELSCOPE=true, get_model() simply returns the input string unchanged (lines 439-441). AutoTokenizer.from_pretrained('deepseek-ai/DeepSeek-R1') then succeeds by downloading to the HuggingFace cache. Finally, _fix_tokenizer_for_sglang(tokenizer, 'deepseek-ai/DeepSeek-R1') is called with the Hub ID as model_path. Inside, Path('deepseek-ai/DeepSeek-R1') / 'tokenizer.json' resolves to the relative path deepseek-ai/DeepSeek-R1/tokenizer.json in the current working directory — which does not exist.
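The relative-path behavior can be checked directly. A minimal sketch (the model ID is the one from the PR; the same holds for any Hub-style ID):

```python
from pathlib import Path

# A Hub model ID used as a filesystem path becomes a relative path in the CWD.
tok_file = Path("deepseek-ai/DeepSeek-R1") / "tokenizer.json"

print(tok_file)            # deepseek-ai/DeepSeek-R1/tokenizer.json
print(tok_file.is_file())  # False unless such a directory happens to exist in the CWD
```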
Why existing code doesn't prevent it
The code has no fallback to resolve the HF cache location. The is_file() guards are appropriate for local paths but become a silent no-op for Hub IDs. There is no warning, log message, or exception — the function just returns the unmodified tokenizer. Because AutoTokenizer.from_pretrained succeeds silently using the HF cache, callers have no indication the fix was skipped.
What the impact would be
Any user who passes a HuggingFace Hub model ID to the benchmark script (the default and most common usage pattern) will get an unfixed tokenizer. The benchmark client will still tokenize text differently from the sglang server — the exact mismatch this PR intends to resolve. For DeepSeek-R1, this means the client sees ~7,000 tokens but the server sees ~35,000 tokens, causing ~5x TTFT inflation and ~24% throughput regression as described in the PR description. The fix only works if users explicitly pre-download models to a local path.
How to fix it
After loading the tokenizer, resolve the local cache path before calling _fix_tokenizer_for_sglang. The simplest approach is to use tokenizer.name_or_path (which AutoTokenizer populates with the resolved cache directory) or call huggingface_hub.snapshot_download(model_id, local_files_only=True) to get the cache path without triggering a re-download. For example: _fix_tokenizer_for_sglang(tokenizer, tokenizer.name_or_path).
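One possible shape for that resolution step, as a hedged sketch. `resolve_local_model_path` is a hypothetical helper, not code from the PR; it assumes `huggingface_hub` is installed for the cache lookup and falls back to the original identifier otherwise:

```python
from pathlib import Path

def resolve_local_model_path(model_path: str) -> str:
    """Resolve a model identifier to a local directory containing tokenizer.json.

    Sketch only: if model_path is already a local directory, return it unchanged;
    otherwise try to locate the HuggingFace cache snapshot without re-downloading.
    """
    if Path(model_path).is_dir():
        return model_path
    try:
        from huggingface_hub import snapshot_download
        # local_files_only=True returns the cached snapshot path or raises
        # if the model is not in the local cache.
        return snapshot_download(model_path, local_files_only=True)
    except Exception:
        # Fall back to the original identifier; the caller's is_file() guards
        # will then skip the fix, matching the current behavior.
        return model_path
```

The benchmark client could then call `_fix_tokenizer_for_sglang(tokenizer, resolve_local_model_path(model_id))` so the fix also works for Hub IDs.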
Step-by-step proof
- User runs benchmark with `--model deepseek-ai/DeepSeek-R1` (no local path).
- `get_tokenizer('deepseek-ai/DeepSeek-R1')` is called.
- `os.path.exists('deepseek-ai/DeepSeek-R1')` → `False`.
- `get_model('deepseek-ai/DeepSeek-R1')` returns `'deepseek-ai/DeepSeek-R1'` (ModelScope not set).
- `AutoTokenizer.from_pretrained('deepseek-ai/DeepSeek-R1')` succeeds; the tokenizer is loaded from `~/.cache/huggingface/hub/...`.
- `_fix_tokenizer_for_sglang(tokenizer, 'deepseek-ai/DeepSeek-R1')` is called.
- `Path('deepseek-ai/DeepSeek-R1') / 'tokenizer.json'` → `PosixPath('deepseek-ai/DeepSeek-R1/tokenizer.json')`.
- `tok_file.is_file()` → `False`, so the pre_tokenizer/decoder fix is skipped.
- `config_file.is_file()` → `False`, so the add_bos/add_eos fix is skipped.
- The tokenizer is returned with the v5-corrupted pre_tokenizer (`Metaspace` instead of `Sequence`) and decoder (`Sequence` instead of `ByteLevel`).
- The client tokenizes a 7,000-token prompt; the server re-tokenizes it to ~35,000 tokens, causing the 5x TTFT inflation.
loaded_pre = type(backend.pre_tokenizer).__name__ if backend.pre_tokenizer else None
if raw_pre and loaded_pre and raw_pre != loaded_pre:
    backend.pre_tokenizer = raw.pre_tokenizer
🟡 In _fix_tokenizer_for_sglang, backend.decoder = raw.decoder is assigned unconditionally without checking if raw.decoder is None. If a tokenizer.json defines a pre_tokenizer but no decoder, this silently nulls out the existing working decoder on an already-loaded tokenizer. A one-line guard (if raw.decoder is not None) would make this assignment safe.
Extended reasoning...
What the bug is: At lines 477-478, when a pre_tokenizer type mismatch is detected, the code unconditionally assigns:

backend.pre_tokenizer = raw.pre_tokenizer
backend.decoder = raw.decoder

The entry condition (`if raw_pre and loaded_pre and raw_pre != loaded_pre`) only validates that both tokenizers have a pre_tokenizer component; it does not check whether `raw.decoder` is non-`None` before performing the decoder assignment.
The specific code path: If tokenizer.json defines a pre_tokenizer but omits a decoder key (or sets it to null), raw.decoder will be None. The code will then execute backend.decoder = None, silently overwriting a previously valid decoder on the already-loaded tokenizer.
Why existing guards don't prevent it: The condition guards raw.pre_tokenizer (via raw_pre) and backend.pre_tokenizer (via loaded_pre) but makes no assertion about raw.decoder. The guard structure was designed to check pre_tokenizer consistency, not decoder availability. The surrounding try/except Exception: pass also means any later decoding failure — which would be a downstream consequence, not a local exception — would be completely silent and hard to trace.
Impact: Any tokenizer.json that specifies a pre_tokenizer without a decoder would silently break all token-to-text decoding after this function runs. Since the failure happens after the function returns, it would manifest as garbage output or crashes far from the assignment site, making the root cause very difficult to diagnose.
Step-by-step proof:
- A model's `tokenizer.json` contains `{"pre_tokenizer": {"type": "Sequence", ...}}` but no `"decoder"` key.
- `raw = RawTokenizer.from_file(str(tok_file))` loads this file; `raw.decoder` is `None`.
- `raw_pre = "Sequence"`, `loaded_pre = "Metaspace"` (v5 replacement), so the condition is `True`.
- `backend.pre_tokenizer = raw.pre_tokenizer` is correct.
- `backend.decoder = raw.decoder` sets `backend.decoder = None`; the working decoder is silently destroyed.
- Any subsequent `tokenizer.decode(token_ids)` call now fails or produces empty output.
Addressing the refutation: The refutation correctly notes that real-world tokenizer.json files with a pre_tokenizer almost universally also define a decoder (DeepSeek-R1 does have a ByteLevel decoder). In practice this scenario is unlikely to trigger for the intended targets. However, the defensive None-check is a one-line fix with zero downside, and code that silently corrupts state depending on an implicit invariant of external files is fragile. The missing check is a real defect, even if it is low severity.
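The guard's effect can be illustrated without the tokenizers library. `FakeBackend` below is a stand-in object for demonstration, not the real `tokenizers.Tokenizer`:

```python
class FakeBackend:
    """Stand-in for tokenizers.Tokenizer, just to illustrate the guard."""
    def __init__(self):
        self.pre_tokenizer = "Metaspace"   # v5's incorrect replacement
        self.decoder = "ByteLevel"         # still correct and working

backend = FakeBackend()
raw_pre_tokenizer = "Sequence"  # from tokenizer.json
raw_decoder = None              # tokenizer.json omitted the "decoder" key

# Unguarded `backend.decoder = raw_decoder` would null the working decoder.
backend.pre_tokenizer = raw_pre_tokenizer
if raw_decoder is not None:     # the one-line guard proposed in the review
    backend.decoder = raw_decoder

assert backend.decoder == "ByteLevel"  # the decoder survives
```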
Fix:

if raw_pre and loaded_pre and raw_pre != loaded_pre:
    backend.pre_tokenizer = raw.pre_tokenizer
    if raw.decoder is not None:
        backend.decoder = raw.decoder
Summary
Problem
When benchmarking sglang server performance with the latest sglang (which uses transformers 5.3.0), we observed a significant performance regression compared to the older `0313-2` image (transformers 4.57.1).

Root Cause
Transformers v5 changed how `LlamaTokenizerFast` is loaded. During `__init__`, v5 rebuilds the `pre_tokenizer` and `decoder` from scratch using Llama-specific components, discarding the originals defined in `tokenizer.json`. For models like DeepSeek-R1 that declare `LlamaTokenizerFast` but actually use a ByteLevel/Sequence tokenizer architecture, v5 incorrectly replaces:

- `pre_tokenizer`: `Sequence` → `Metaspace`
- `decoder`: `ByteLevel` → `Sequence`

The sglang server already fixes this at startup via `_fix_v5_tokenizer_components()` in `hf_transformers_utils.py`, but the benchmark client loads the tokenizer directly via `AutoTokenizer.from_pretrained()` without these fixes. This mismatch causes the client and server to tokenize text differently. The ~5x token inflation on the server side results in 5x more prefill work, which directly causes the observed TTFT and throughput regression.
Fix

Add `_fix_tokenizer_for_sglang()` to the benchmark client's `get_tokenizer()`, applying the same two fixes that the sglang server applies:

- Restore the original `pre_tokenizer`/`decoder` from `tokenizer.json`, overwriting the incorrect v5 replacements.
- Recover the `add_bos_token`/`add_eos_token` flags from `tokenizer_config.json` that v5 strips during loading.

This is a no-op on transformers v4.
Results
Token count (per request, ~7400 token input):
Benchmark (10 prompts, concurrency=1, input ~8K, output ~1K, `--disable-radix-cache`):

After the fix, the new image's performance matches the old image: no actual regression.